Low Rank Correlation Representation and Clustering
Authors
Abstract
Similar works
Symmetric low-rank representation for subspace clustering
We propose a symmetric low-rank representation (SLRR) method for subspace clustering, which assumes that a data set is approximately drawn from the union of multiple subspaces. The proposed technique can reveal the membership of multiple subspaces through the self-expressiveness property of the data. In particular, the SLRR method considers a collaborative representation combined with low-rank ...
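"Self-expressiveness," as used here, means that each sample can be written as a linear combination of the other samples drawn from the same union of subspaces. In low-rank representation (LRR) style methods this is usually captured by a program of the following form; this is a generic sketch with X the data matrix (one sample per column) and Z the coefficient matrix, not necessarily the exact SLRR objective:

\min_{Z}\ \|Z\|_{*} \quad \text{s.t.} \quad X = XZ

The nuclear-norm penalty pushes Z toward a block-diagonal pattern that reveals which samples share a subspace, and |Z| then serves as an affinity matrix for spectral clustering; SLRR additionally asks this representation to be symmetric.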
Correlation Clustering with Low-Rank Matrices
Correlation clustering is a technique for aggregating data based on qualitative information about which pairs of objects are labeled ‘similar’ or ‘dissimilar.’ Because the optimization problem is NP-hard, much of the previous literature focuses on finding approximation algorithms. In this paper we explore how to solve the correlation clustering objective exactly when the data to be clustered ca...
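For reference, the objective referred to here is the standard minimum-disagreement form of correlation clustering: given pairwise "similar"/"dissimilar" labels, choose a partition \mathcal{C} (with no preset number of clusters) that violates as few labels as possible. The notation below is generic rather than taken from the paper:

\min_{\mathcal{C}}\ \sum_{(i,j)\ \text{labeled similar}} \mathbf{1}[\mathcal{C}(i) \neq \mathcal{C}(j)] \;+\; \sum_{(i,j)\ \text{labeled dissimilar}} \mathbf{1}[\mathcal{C}(i) = \mathcal{C}(j)]

Minimizing this exactly is NP-hard in general, which is why the paper above exploits low-rank structure in the label matrix.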
Robust latent low rank representation for subspace clustering
Subspace clustering has found wide applications in machine learning, data mining, and computer vision. Latent Low Rank Representation (LatLRR) is one of the state-of-the-art methods for subspace clustering. However, its effectiveness is undermined by a recent discovery that the solution to the noiseless LatLRR model is non-unique. To remedy this issue, we propose choosing the sparsest solution i...
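For context, the noiseless LatLRR model mentioned here couples a column-space representation Z with a latent row-space representation L; the formulation below is the commonly cited one and is only a sketch:

\min_{Z,L}\ \|Z\|_{*} + \|L\|_{*} \quad \text{s.t.} \quad X = XZ + LX

The non-uniqueness result concerns the solution set of this program, and the remedy described above selects among those solutions by a sparsity criterion.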
Subspace clustering using a symmetric low-rank representation
In this paper, we propose a low-rank representation with symmetric constraint (LRRSC) method for robust subspace clustering. Given a collection of data points approximately drawn from multiple subspaces, the proposed technique can simultaneously recover the dimension and members of each subspace. LRRSC extends the original low-rank representation algorithm by integrating a symmetric constraint ...
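One natural way to write LRR with the symmetric constraint described here is the following sketch; the exact regularizers and weights used in the paper may differ:

\min_{Z,E}\ \|Z\|_{*} + \lambda \|E\|_{2,1} \quad \text{s.t.} \quad X = XZ + E,\ \ Z = Z^{\top}

The symmetry constraint makes Z directly usable as an affinity matrix, after which spectral clustering recovers the dimension and members of each subspace.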
Low rank subspace clustering (LRSC)
We consider the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise and/or gross errors. We pose this problem as a non-convex optimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean and self-expressive dictionary plus a matrix of noise and/or gross errors. By self-expressive w...
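The decomposition described here can be sketched as follows, with notation and the choice of norms purely illustrative: the corrupted data D is split into a clean dictionary A that expresses itself through a low-rank coefficient matrix C, plus an error term E for noise and/or gross corruptions:

\min_{A,C,E}\ \|C\|_{*} + \frac{\tau}{2}\|A - AC\|_{F}^{2} + \lambda\|E\|_{1} \quad \text{s.t.} \quad D = A + E

A formulation of this kind is non-convex and is typically optimized by alternating updates of the clean dictionary and the representation.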
Journal
Journal title: Scientific Programming
Year: 2021
ISSN: 1875-919X, 1058-9244
DOI: 10.1155/2021/6639582